TECHnalysis Research Blog

March 14, 2023
Google Unveils Generative AI Tools for Workspace and GCP

By Bob O'Donnell

As a long-time electronic musician (and the former editor of both Electronic Musician and Music Technology magazines), I’ve always been enamored with musical synthesizers. Leveraging a specialized set of circuits, these instruments are designed to generate an enormous array of intriguing sounds from relatively basic raw sonic material. In several ways, today’s rapidly growing crop of generative AI tools bears some interesting resemblances to them, in that these tools can synthesize very impressive content from combinations of simple word-like “tokens” (albeit billions of them!). Generative AI tools are, in a very real sense, content synthesizers.

The latest entry to the content synthesis fray came via Google, which is bringing an impressive array of new offerings, capabilities, and roadmaps to the market via updates to its Vertex AI platform for Google Cloud and its Google Workspace productivity suite. After letting Microsoft (along with its ChatGPT partnership with OpenAI) take much of the attention over the last few weeks—to the point where articles questioning Google’s ambitions for generative AI even began to appear—it is clear that the company long perceived as being an AI leader has not been resting on its laurels.

Indeed, today’s debut offers a comprehensive set of applications, services, and interesting new approaches that make it clear that Google has no intention of ceding the generative AI market to anyone. Specifically, the company unveiled several new capabilities for its Vertex AI suite for Google Cloud, a new Generative AI App Builder for professional developers, a range of upcoming capabilities for all the productivity apps in Google Workspace, the Maker Suite for less experienced “citizen developers”, a new PaLM large language model (LLM), and the ability to integrate a variety of third-party applications and even LLMs into its collection of offerings. Quite frankly, it’s a nearly overwhelming amount of information to take in at a single sitting, but it proves, if nothing else, that a lot of people at Google have been working on these developments for a very long time.

To be clear, not all of the capabilities will be available immediately—Google laid out a vision of some things it has now and shared where it’s headed in the future—but in the incredibly dynamic market that is generative AI, the company clearly felt compelled to make a statement. And make a statement they most assuredly did.

Some of the most interesting aspects of the Google vision for generative AI are around openness and the ability to collaborate with other companies. For example, Google talked about the idea of a foundation model “zoo” where different LLMs could essentially be plugged into different applications. So, for example, while you could certainly use Google’s newly upgraded PaLM (Pathways Language Model) text or PaLM chat models in enterprise applications via API calls, you could also use other third-party or even open source LLMs in their place. The degree of flexibility and customizability across the different LLMs was impressive, though I also couldn’t help but think that corporate IT departments could quickly start getting overwhelmed by the range of choices that would be available. Given the inevitable demands for testing and compliance, there might be some value in limiting the number of options that organizations can use (at least initially).
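The “zoo” concept amounts to a common interface with pluggable model backends: application code talks to one abstraction, and the LLM behind it can be swapped. As a rough sketch of that idea (all class and model names here are hypothetical illustrations, not Google’s actual Vertex AI API):

```python
# Illustrative sketch of the foundation model "zoo" idea: applications code
# against one shared interface, and different LLM backends can be plugged in
# behind it. All names below are hypothetical, not Google's real API.

class TextModel:
    """Minimal common interface a 'zoo' of models could share."""
    def generate(self, prompt: str) -> str:
        raise NotImplementedError


class PaLMTextModel(TextModel):
    def generate(self, prompt: str) -> str:
        # In practice this would be an authenticated call to a hosted model.
        return f"[palm-text output for: {prompt}]"


class OpenSourceModel(TextModel):
    def generate(self, prompt: str) -> str:
        # ...while this might invoke a self-hosted open source model instead.
        return f"[open-model output for: {prompt}]"


def summarize(model: TextModel, document: str) -> str:
    # Application logic is unchanged regardless of which backend is plugged in.
    return model.generate(f"Summarize: {document}")


print(summarize(PaLMTextModel(), "quarterly sales report"))
print(summarize(OpenSourceModel(), "quarterly sales report"))
```

This is also where the IT-governance concern above comes from: every backend added to the zoo is another model that has to be tested and vetted, even though the application code never changes.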

Even better, Google made a big point of emphasizing that organizations could integrate their own data on top of Google’s (or presumably others’) LLMs to make them customized to the unique needs of an organization. For example, companies could ingest some of their own original content, images, styles, etc., into an existing LLM, and that custom model could then be used as the core generative AI engine for an organization’s content synthesis applications. By themselves, these customizations could prove particularly appealing to many organizations.
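In practice, layering an organization’s data onto a foundation model typically means packaging that content as tuning examples. The JSONL prompt/completion format below is a common industry convention for this kind of tuning data, not a confirmed detail of Google’s pipeline, and the company and file names are made up for illustration:

```python
import json

# Hypothetical sketch: packaging an organization's own content as tuning
# examples to layer on top of a foundation model. The prompt/completion
# JSONL format is a common convention, not Google's confirmed format.
examples = [
    {"prompt": "Draft a support reply about a delayed shipment.",
     "completion": "Thanks for reaching out. Here at Acme we..."},
    {"prompt": "Summarize our returns policy.",
     "completion": "Acme accepts returns within 30 days of purchase..."},
]

# Write one JSON record per line, the shape most tuning jobs expect.
with open("tuning_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The resulting file would then be supplied to a tuning job, producing a
# custom model the organization can call like any other.
```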

There were also a lot of announcements about partnerships that Google has with a variety of different vendors, from little-known AI startups like AI21 Labs and Osmo to quickly rising developers, such as code generation toolmaker Replit or LLM developers Anthropic and Cohere. On the generative image side of things, Google highlighted work it has been doing with Midjourney, which not only allows initial creation of images via text-based descriptions, but text-based edits and refinements as well.

Google also made a point of emphasizing the customizability within existing models. Specifically, the company demonstrated how individuals could adjust different model parameter settings as part of their initial query to the tool to set the level of accuracy, creativity, flexibility and more that they could expect from the output. Unfortunately, in classic Google style, very engineering-specific terms were used for some of these parameters, making it unclear whether regular users will actually be able to make sense of them. However, the concept behind it is great, and thankfully, parameter wording can be edited.
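The kinds of knobs being described are familiar from LLM APIs generally: a sampling “temperature” governing how varied the output is, limits on which tokens can be sampled, and a cap on response length. A sketch of what such a per-query request might carry (the parameter names here are typical of LLM APIs in general, not necessarily Google’s exact terms):

```python
# Hedged sketch of per-query generation parameters. The names below
# (temperature, top_k, top_p, max_output_tokens) are common across LLM
# APIs; they are illustrative, not necessarily Google's exact wording.
request = {
    "prompt": "Write a short product description for a travel mug.",
    "parameters": {
        "temperature": 0.8,        # higher = more varied, "creative" output
        "top_k": 40,               # sample only from the 40 likeliest tokens
        "top_p": 0.95,             # ...within 95% cumulative probability
        "max_output_tokens": 256,  # cap on the length of the response
    },
}
```

Terms like “top_k” are exactly the sort of engineering-speak the column refers to; a friendlier UI would presumably relabel them as something like “creativity” or “variety” for regular users.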

Speaking of editing, some of the most interesting content-based demos that Google illustrated for Workspace involved the ability to edit existing content (say, from a more formal written tone to a more casual one) or extrapolate from a relatively limited input prompt. Admittedly, other generative AI tools have shown these kinds of capabilities, but the UI and overall experience model that Google showed looked very intuitive.

In addition to software, Google also touched upon the hardware side of the Google Cloud infrastructure that’s able to support all these efforts for both Vertex AI and Workspace. Specifically, the company noted how many of these workloads are powered by various combinations of their own TPUs (Tensor Processing Units) as well as Nvidia’s powerful GPUs. While much of the focus on generative AI applications has only been on the software, there is little doubt that hardware innovations in the semiconductor and server space will continue to have an extremely large impact on these generative AI developments.

Returning to the synthesizer analogy, the advancements in LLMs that Google’s new offerings highlight in many ways reflect the diversity of different sound engines and architectures used to design them. Just as there are many types of synthesizers, with the primary differences coming from the raw source material used in the sound engine and the signal flow through which it proceeds, so too do I expect to see more variety in foundational LLMs. There will likely be a diversity of source materials used for various models and different architectures through which they’ll be processed. Similarly, the degree of “programmability” will likely vary quite a bit as well, from a modest number of preset options to the complete (but potentially overwhelming) flexibility of modularity—just as is found in the world of synthesizers.

From an availability perspective, many of Google’s latest capabilities are initially limited to a set of trusted testers, and pricing (and even purchase options) for these services remains unannounced. For regular users, some of the text-based content generation tools in Docs and Gmail will likely be the first taste of Google-driven generative AI that many experience. And as with Microsoft’s offerings, future iterations and enhancements will undoubtedly come at a very rapid pace.

In fact, there is little doubt that we’ve entered an enormously exciting and tremendously competitive new era in the enterprise computing and overall tech world. Generative AI-based tools have sparked a mind-blowing range of potential new applications and productivity enhancements that we’re really just starting to get our minds around. As with many big tech trends, there’s little doubt that it’s also being overhyped, just as it comes to life. At the same time, much more than with the arguably flawed Bard demonstration, it’s clear with these announcements that Google has now firmly placed a stake in the ground of the rapidly evolving world of generative AI tools and services. What happens next isn’t at all clear, but it’s going to be incredibly exciting to watch.

Here's a link to the original column: https://www.linkedin.com/pulse/google-unveils-generative-ai-tools-workspace-gcp-bob-o-donnell

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.